State-of-the-art object detectors are fast and accurate, but they require a large amount of well-annotated training data to perform well. However, obtaining a large number of training annotations specific to a particular task, i.e., fine-grained annotations, is costly in practice. In contrast, obtaining common-sense relationships from text, e.g., "a table-lamp is a lamp that sits on top of a table", is much easier. Additionally, common-sense relationships like "on-top-of" are easy to annotate in a task-agnostic fashion. In this paper, we propose a probabilistic model that uses such relational knowledge to transform an off-the-shelf detector of coarse object categories (e.g., "table", "lamp") into a detector of fine-grained categories (e.g., "table-lamp"). We demonstrate that our method, RelDetect, achieves performance competitive with finetuning-based state-of-the-art object detector baselines when only an extremely small amount of fine-grained annotations is available ($0.2\%$ of the entire dataset). We also demonstrate that RelDetect exploits the inherent transferability of relationship information to outperform the above baselines ($+5$ mAP points) on an unseen dataset (zero-shot transfer). In summary, we demonstrate the power of using relationships for object detection on datasets where fine-grained object categories can be linked to coarse-grained categories via suitable relationships.
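To make the core idea concrete, here is a minimal sketch of how relational knowledge could turn coarse detections into fine-grained ones: a "lamp" box and a "table" box are combined through a crude "on-top-of" probability. The geometric heuristic, function names, and tolerances are illustrative assumptions, not RelDetect's actual probabilistic model.

```python
import itertools

def on_top_of_prob(upper, lower):
    """Crude geometric score for "upper sits on top of lower".
    Boxes are (x1, y1, x2, y2) with y increasing downward. Hypothetical heuristic."""
    horiz_overlap = max(0.0, min(upper[2], lower[2]) - max(upper[0], lower[0]))
    horiz_frac = horiz_overlap / max(1e-6, upper[2] - upper[0])
    vert_gap = abs(lower[1] - upper[3])           # lower's top edge near upper's bottom edge
    vert_score = max(0.0, 1.0 - vert_gap / 50.0)  # 50 px tolerance, assumed
    return horiz_frac * vert_score

def fine_grained_scores(detections, fine_label, part, base, relation_prob):
    """Score fine-grained instances, e.g. fine_label="table-lamp" from
    part="lamp" on base="table", given coarse detections
    [(label, confidence, box), ...] from any off-the-shelf detector."""
    scored = []
    for (la, ca, ba), (lb, cb, bb) in itertools.permutations(detections, 2):
        if la == part and lb == base:
            # fine-grained score combines both coarse confidences and the relationship
            scored.append((fine_label, ca * cb * relation_prob(ba, bb), ba))
    return scored

dets = [("lamp", 0.9, (120, 40, 160, 100)), ("table", 0.8, (80, 95, 220, 180))]
print(fine_grained_scores(dets, "table-lamp", "lamp", "table", on_top_of_prob))
```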
COVID-19 has resulted in multiple waves of infections associated with different SARS-CoV-2 variants. Studies have reported differential effects of these variants on the respiratory health of patients. We explore whether acoustic signals collected from COVID-19 subjects exhibit distinguishable acoustic patterns, suggesting the possibility of predicting the underlying virus variant. We analyze the Coswara dataset, collected from three subject pools: i) healthy subjects, ii) COVID-19 subjects recorded during the period when the delta variant was dominant, and iii) COVID-19 subjects recorded during the omicron surge. Our findings suggest that multiple sound categories, such as cough, breathing, and speech, show significant differences in acoustic features when comparing COVID-19 subjects infected with omicron against those infected with delta. The classification areas under the curve are significantly above chance for discriminating omicron-infected subjects from delta-infected subjects. Using score fusion across multiple sound categories, we obtain an area under the curve of 89% and a sensitivity of 52.4% at a specificity of 95%. Furthermore, a hierarchical three-class approach, which first classifies the acoustic data into healthy and COVID-19 positive and then separates the COVID-19 subjects into delta and omicron variants, yields a high three-class classification accuracy. These results suggest new directions for designing sound-based COVID-19 diagnostic approaches.
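Below is a minimal sketch of the score-fusion and operating-point evaluation described above, assuming per-category classifier scores are already available; the synthetic scores and equal-weight averaging are illustrative, not the paper's exact recipe.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-subject scores from three sound-category classifiers
# (cough, breathing, speech), plus binary labels (1 = omicron, 0 = delta).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = {c: labels + rng.normal(0, 1.2, size=200) for c in ["cough", "breathing", "speech"]}

fused = np.mean([scores[c] for c in scores], axis=0)  # equal-weight score fusion
print("fused AUC:", roc_auc_score(labels, fused))

# Sensitivity at a fixed 95% specificity operating point (specificity = 1 - FPR)
fpr, tpr, _ = roc_curve(labels, fused)
print("sensitivity at 95% specificity:", tpr[fpr <= 0.05].max())
```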
The COVID-19 pandemic has accelerated research on the design of alternative, fast, and effective COVID-19 diagnostic methods. In this paper, we describe the Coswara tool, a website application designed to enable COVID-19 detection by analyzing respiratory sound samples and health symptoms. A user of this service can log into the website from any internet-connected device, provide information on current health symptoms, and record a few sound samples corresponding to breathing, cough, and speech. Within a minute of analyzing this information, the website tool outputs a COVID-19 probability score to the user. As the COVID-19 pandemic continues to demand massive and scalable population-level testing, we hypothesize that the proposed tool offers a potential solution toward this goal.
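As a rough illustration of the described record-and-score workflow (not Coswara's actual implementation), a server-side endpoint might look like the following; the route name, model stub, and request layout are all assumptions.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def covid_probability(symptoms: dict, audio_clips: list) -> float:
    """Placeholder for the server-side model that fuses symptom data
    with features of the breathing, cough, and speech recordings."""
    return 0.5  # stub: a real deployment would run the trained classifier here

@app.post("/score")
def score():
    symptoms = request.form.to_dict()                   # current health symptoms
    audio = [f.read() for f in request.files.values()]  # breathing/cough/speech clips
    return jsonify({"covid19_probability": covid_probability(symptoms, audio)})

if __name__ == "__main__":
    app.run()
```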
Neural networks and related deep learning methods are currently at the forefront of techniques used for classifying objects. However, they generally demand large amounts of time and data for model training, and their learned models can sometimes be difficult to interpret. In this paper, we advance FastMapSVM, an interpretable machine learning framework for classifying complex objects, as an advantageous alternative to neural networks for general classification tasks. FastMapSVM extends the applicability of support-vector machines (SVMs) to domains with complex objects by combining the complementary strengths of FastMap and SVMs. FastMap is an efficient linear-time algorithm that maps complex objects to points in a Euclidean space while preserving pairwise domain-specific distances between them. We demonstrate the efficiency and effectiveness of FastMapSVM in the context of classifying seismograms. We show that its performance, in terms of precision, recall, and accuracy, is comparable to that of other state-of-the-art methods, while requiring significantly less time and data for model training. It also provides a perspicuous visualization of the objects and the classification boundaries between them. We expect FastMapSVM to be viable for classification tasks in many other real-world domains.
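To make the FastMap step concrete, the sketch below implements the classic pivot-based embedding recursion with a user-supplied domain-specific distance, then trains an SVM on the resulting coordinates. It illustrates the general technique, not the authors' code; the pivot-selection heuristic and toy distance are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def fastmap(objects, dist, k):
    """Embed objects into R^k, approximately preserving the pairwise
    distances given by dist(a, b). Classic pivot-based FastMap recursion."""
    n = len(objects)
    X = np.zeros((n, k))

    def d2(i, j, col):
        # squared distance in the residual space after the first `col` axes
        return dist(objects[i], objects[j]) ** 2 - np.sum((X[i, :col] - X[j, :col]) ** 2)

    for col in range(k):
        # heuristically pick a far-apart pivot pair (a, b)
        a = 0
        b = max(range(n), key=lambda j: d2(a, j, col))
        a = max(range(n), key=lambda j: d2(b, j, col))
        dab2 = d2(a, b, col)
        if dab2 <= 0:
            break  # remaining residual distances are (numerically) zero
        for i in range(n):
            # cosine-law projection of object i onto the pivot line
            X[i, col] = (d2(a, i, col) + dab2 - d2(b, i, col)) / (2 * np.sqrt(dab2))
    return X

# Toy usage: Euclidean distance stands in for a seismogram distance function.
objs = [np.array(p, float) for p in [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]]
X = fastmap(objs, lambda a, b: float(np.linalg.norm(a - b)), k=2)
clf = SVC(kernel="rbf").fit(X, [0, 0, 0, 1, 1, 1])
print(clf.predict(X))
```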
Human space exploration beyond low Earth orbit will involve missions of significant distance and duration. To effectively mitigate myriad space health hazards, paradigm shifts in data and space health systems are necessary to enable Earth independence rather than Earth reliance. Promising developments in the fields of artificial intelligence and machine learning for biology and health can address these needs. We propose an appropriately autonomous and intelligent Precision Space Health system that will monitor, aggregate, and assess biomedical status; analyze and predict personalized adverse health outcomes; adapt and respond to newly accumulated data; and provide preventive, actionable, and timely insights to individual deep-space crew members, along with iterative decision support to their crew medical officer. Here we present a summary of recommendations from a workshop organized by the National Aeronautics and Space Administration on future applications of artificial intelligence in space biology and health. In the next decade, biomonitoring technology, biomarker science, spacecraft hardware, intelligent software, and streamlined data management must mature and be woven together into a Precision Space Health system to enable humanity to thrive in deep space.
Space biology research aims to understand the fundamental effects of spaceflight on organisms, develop foundational knowledge to support deep space exploration, and ultimately bioengineer spacecraft and habitats to stabilize ecosystems of plants, crops, microbes, animals, and humans for sustained multi-planetary life. To advance these aims, the field leverages experiments, platforms, data, and model organisms from both spaceborne and ground-analog studies. As research extends beyond low Earth orbit, experiments and platforms must be maximally autonomous, lightweight, agile, and intelligent in order to expedite knowledge discovery. Here we present a summary of recommendations from a workshop organized by the National Aeronautics and Space Administration on artificial intelligence, machine learning, and modeling applications that offer key solutions to these space biology challenges. In the next decade, integrating artificial intelligence into the field of space biology will deepen the biological understanding of spaceflight effects, facilitate predictive modeling and analytics, support maximally autonomous and reproducible experiments, and efficiently manage spaceborne data and metadata, all with the goal of enabling life to thrive in deep space.
Treatment decisions for brain metastatic disease rely on knowledge of the primary organ site, which is currently obtained with biopsy and histology. Here we develop a novel deep learning approach for accurate, non-invasive digital histology from whole-brain MRI data. Our IRB-approved, single-site retrospective study comprised patients (n = 1,399) referred for MRI treatment planning and gamma knife radiosurgery over 19 years. Contrast-enhanced T1-weighted and T2-weighted fluid-attenuated inversion recovery brain MRI exams (n = 1,582) were preprocessed and input to the proposed deep learning workflow for tumor segmentation, modality transfer, and primary site classification into one of five classes (lung, breast, melanoma, renal, and other). Ten-fold cross-validation yielded an overall AUC of 0.947 (95% CI: 0.938, 0.955), a lung class AUC of 0.899 (95% CI: 0.884, 0.915), a breast class AUC of 0.990 (95% CI: 0.983, 0.997), a melanoma class AUC of 0.882 (95% CI: 0.858, 0.906), a renal class AUC of 0.870 (95% CI: 0.823, 0.918), and an "other" class AUC of 0.885 (95% CI: 0.843, 0.949). These data establish that whole-brain imaging features are discriminative enough to allow accurate diagnosis of the primary organ site of malignancy. Our end-to-end deep radiomics approach has substantial potential for classifying metastatic tumor types from whole-brain MRI images. Further refinement may offer an invaluable clinical tool to expedite primary cancer site identification for precision treatment and improved outcomes.
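For concreteness, the following sketch shows the kind of ten-fold cross-validated, per-class (one-vs-rest) AUC evaluation reported above, using a stand-in classifier on synthetic features; it illustrates the metric only, not the paper's segmentation, modality-transfer, or classification networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

classes = ["lung", "breast", "melanoma", "renal", "other"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))           # stand-in for learned MRI features
y = rng.integers(0, len(classes), 500)   # stand-in primary-site labels

aucs = {c: [] for c in classes}
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    proba = LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).predict_proba(X[te])
    for k, c in enumerate(classes):
        # one-vs-rest AUC for class c on this fold
        aucs[c].append(roc_auc_score(y[te] == k, proba[:, k]))

for c in classes:
    print(f"{c}: mean AUC {np.mean(aucs[c]):.3f}")
```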
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging, as its vehicles appear in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among fine-grained datasets.
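A minimal sketch of the detect-then-classify combination mentioned above: an off-the-shelf detector proposes vehicle boxes, and a hierarchical classifier labels each crop at three levels. The hierarchy entries and classifier stub are illustrative assumptions, not the FGVD taxonomy or the actual HRN model.

```python
from dataclasses import dataclass

@dataclass
class FineGrainedDetection:
    box: tuple    # (x1, y1, x2, y2)
    level1: str   # vehicle type, e.g. "car", "two-wheeler", "truck"
    level2: str   # make, e.g. "Maruti" (hypothetical example)
    level3: str   # model/variant, e.g. "Maruti Swift" (hypothetical example)

def classify_hierarchically(crop):
    """Stub for a hierarchical classifier (e.g. HRN): each level's prediction
    is conditioned on its parent so labels stay consistent down the tree."""
    return ("car", "Maruti", "Maruti Swift")  # placeholder output

def detect_fine_grained(image, detector):
    results = []
    for box in detector(image):        # coarse vehicle boxes from YOLOv5/Faster R-CNN
        x1, y1, x2, y2 = box
        crop = image[y1:y2, x1:x2]     # classify the cropped vehicle
        results.append(FineGrainedDetection(box, *classify_hierarchically(crop)))
    return results
```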
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand the magnetosphere's global structure and evolution, as well as the mechanisms of its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm for enhancing a low-resolution (LR) input image or video by introducing an additional high-resolution (HR) reference image. Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images. However, performing local transfer is difficult because of two gaps between input and reference images: the transformation gap (e.g., scale and rotation) and the resolution gap (e.g., HR and LR). To tackle these challenges, we propose C2-Matching in this work, which performs explicit robust matching across transformation and resolution. 1) To bridge the transformation gap, we propose a contrastive correspondence network, which learns transformation-robust correspondences using augmented views of the input image. 2) To address the resolution gap, we adopt teacher-student correlation distillation, which distills knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching. 3) Finally, we design a dynamic aggregation module to address potential misalignment between input and reference images. In addition, to faithfully evaluate the performance of reference-based image super-resolution under a realistic setting, we contribute the Webly-Referenced SR (WR-SR) dataset, which mimics the practical usage scenario. We also extend C2-Matching to the reference-based video super-resolution task, where an image taken in a similar scene serves as the HR reference image. Extensive experiments demonstrate that our proposed C2-Matching significantly outperforms state-of-the-art methods on the standard CUFED5 benchmark and also boosts the performance of video SR by incorporating the C2-Matching component into video SR pipelines.
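As a hedged sketch of the teacher-student correlation distillation in step 2): the teacher forms a correlation volume from the easier HR-HR pair, the student from the ambiguous LR-HR pair, and a KL term pulls the student's matching distribution toward the teacher's. The tensor shapes, temperature, and KL objective below are assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def correlation(q, k):
    """Dense feature correlation: q, k are (N, C, H, W); returns (N, H*W, H*W)
    of cosine similarities between every query and key position."""
    n, c = q.shape[:2]
    qf = F.normalize(q.reshape(n, c, -1), dim=1)  # (N, C, HW)
    kf = F.normalize(k.reshape(n, c, -1), dim=1)
    return torch.bmm(qf.transpose(1, 2), kf)      # (N, HW, HW)

def correlation_distillation_loss(feat_lr, feat_hr_input, feat_hr_ref, tau=0.1):
    """Teacher: HR input vs HR reference (easy matching).
    Student: LR input vs HR reference (ambiguous matching).
    KL divergence pulls the student's matching distribution to the teacher's."""
    with torch.no_grad():
        teacher = F.softmax(correlation(feat_hr_input, feat_hr_ref) / tau, dim=-1)
    student = F.log_softmax(correlation(feat_lr, feat_hr_ref) / tau, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")

# Toy shapes: batch of 2, 64-channel feature maps at 16x16 resolution.
f_lr, f_hr_in, f_hr_ref = (torch.randn(2, 64, 16, 16) for _ in range(3))
print(correlation_distillation_loss(f_lr, f_hr_in, f_hr_ref))
```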